
    Chunky and Equal-Spaced Polynomial Multiplication

    Finding the product of two polynomials is an essential and basic problem in computer algebra. While most previous results have focused on worst-case complexity, we instead employ the technique of adaptive analysis to give an improvement in many "easy" cases. We present two adaptive measures and methods for polynomial multiplication, and also show how to combine them effectively to gain both advantages. One useful feature of these algorithms is that they essentially provide a gradient between existing "sparse" and "dense" methods. We prove that these approaches yield significant improvements in many cases, while remaining comparable to the fastest existing algorithms in the worst case. Comment: 23 pages, pdflatex; accepted to the Journal of Symbolic Computation (JSC).
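
    To make the "chunky" idea concrete, here is a minimal Python sketch of one of the two adaptive representations: nearby nonzero terms are grouped into dense blocks, blocks are multiplied pairwise by dense convolution, and the products are recombined. The gap threshold, the dictionary representation, and the schoolbook convolution (standing in for a fast dense method) are illustrative choices, not the paper's.

        # Minimal sketch of chunky multiplication. A sparse polynomial is a
        # dict {exponent: coefficient}; to_chunks groups terms whose exponent
        # gaps are at most `gap` into dense blocks (offset, coefficient list).
        def to_chunks(poly, gap=4):
            chunks = []
            for e in sorted(poly):
                if chunks and e - (chunks[-1][0] + len(chunks[-1][1]) - 1) <= gap:
                    off, coeffs = chunks[-1]
                    coeffs.extend([0] * (e - off - len(coeffs)))  # pad the gap
                    coeffs.append(poly[e])
                else:
                    chunks.append((e, [poly[e]]))
            return chunks

        def dense_mul(a, b):
            # Schoolbook convolution; a stand-in for an FFT-based dense method.
            out = [0] * (len(a) + len(b) - 1)
            for i, x in enumerate(a):
                for j, y in enumerate(b):
                    out[i + j] += x * y
            return out

        def chunky_mul(f, g):
            # Multiply chunks pairwise and recombine into a sparse result.
            prod = {}
            for off_f, cf in to_chunks(f):
                for off_g, cg in to_chunks(g):
                    for k, c in enumerate(dense_mul(cf, cg)):
                        if c:
                            e = off_f + off_g + k
                            prod[e] = prod.get(e, 0) + c
            return prod

        # f = 1 + 2x + x^2 + x^100, g = 3 + x + 5x^101
        print(chunky_mul({0: 1, 1: 2, 2: 1, 100: 1}, {0: 3, 1: 1, 101: 5}))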

    Multivariate sparse interpolation using randomized Kronecker substitutions

    We present new techniques for reducing a multivariate sparse polynomial to a univariate polynomial. The reduction works similarly to the classical and widely used Kronecker substitution, except that we choose the degrees randomly based on the number of nonzero terms in the multivariate polynomial, that is, its sparsity. The resulting univariate polynomial often has significantly lower degree than the Kronecker substitution polynomial, at the expense of a small number of term collisions. As an application, we give a new algorithm for multivariate interpolation that uses these techniques along with any existing univariate interpolation algorithm. Comment: 21 pages, 2 tables, 1 procedure; accepted to ISSAC 2014.
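
    The substitution itself is easy to sketch: a term c*x1^e1*...*xn^en maps to c*z^(e1*s1 + ... + en*sn). The classical choice s_i = prod_{j<i}(d_j + 1) is injective but yields huge degree; the randomized variant draws much smaller random weights, trading a few collisions for a far lower-degree image. In the Python sketch below, the bound 2t on the random weights is an illustrative stand-in for the paper's more careful choices.

        import random

        def substitute(poly, s):
            # poly: dict mapping exponent tuples to coefficients. Returns the
            # univariate image {e1*s1 + ... + en*sn: coeff}; colliding terms add.
            image = {}
            for exps, c in poly.items():
                d = sum(e * w for e, w in zip(exps, s))
                image[d] = image.get(d, 0) + c
            return image

        def classical_weights(deg_bounds):
            # s_i = prod_{j<i} (d_j + 1): injective, but the degree multiplies up.
            s, acc = [], 1
            for d in deg_bounds:
                s.append(acc)
                acc *= d + 1
            return s

        def randomized_weights(n, t):
            # Illustrative: small random weights tied to the sparsity t, keeping
            # the image degree low at the cost of occasional collisions.
            return [random.randint(1, 2 * t) for _ in range(n)]

        # f = 3*x^9*y^4 + y^7 + 2*x, sparsity t = 3, degree bounds (9, 7)
        f = {(9, 4): 3, (0, 7): 1, (1, 0): 2}
        print(substitute(f, classical_weights([9, 7])))  # injective, degree up to 79
        print(substitute(f, randomized_weights(2, 3)))   # low degree, may collide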

    Parallel sparse interpolation using small primes

    To interpolate a supersparse polynomial with integer coefficients, two alternative approaches are the Prony-based "big prime" technique, which works over a single large finite field, and the more recently proposed "small primes" technique, which reduces the unknown sparse polynomial to many low-degree dense polynomials. While the latter technique has not yet reached the theoretical efficiency of Prony-based methods, it has an obvious potential for parallelization. We present a heuristic "small primes" interpolation algorithm and report on a low-level C implementation using FLINT and MPI. Comment: accepted to PASCO 2015.
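
    In outline, the reduction computes the unknown f modulo x^p - 1 for several small primes p, folding each term c*x^e to c*x^(e mod p); the true exponents of non-colliding terms are then reassembled by the Chinese Remainder Theorem. The Python sketch below matches terms across images purely by their coefficients, so it assumes distinct coefficients and ignores collision handling, a simplification of the paper's heuristic:

        def fold(poly, p):
            # The image of f modulo x^p - 1: each c*x^e folds to c*x^(e mod p).
            img = {}
            for e, c in poly.items():
                img[e % p] = img.get(e % p, 0) + c
            return img

        def crt_pair(r1, m1, r2, m2):
            # Combine e = r1 (mod m1) and e = r2 (mod m2), m1 and m2 coprime.
            t = (r2 - r1) * pow(m1, -1, m2) % m2
            return r1 + m1 * t, m1 * m2

        def recover(images, primes):
            # Match terms across images by coefficient (assumed distinct) and
            # rebuild each exponent by CRT, skipping coefficients that collided.
            terms = {}
            for img, p in zip(images, primes):
                for e, c in img.items():
                    if list(img.values()).count(c) == 1:
                        r, m = terms.get(c, (0, 1))
                        terms[c] = crt_pair(r, m, e, p)
            return {r: c for c, (r, m) in terms.items()}

        f = {1000003: 5, 70: 2, 12345: 7}        # the "unknown" sparse polynomial
        primes = [101, 103, 107]                 # a real algorithm obtains the
        images = [fold(f, p) for p in primes]    # images from evaluations alone
        print(recover(images, primes))           # {1000003: 5, 70: 2, 12345: 7}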

    Computing sparse multiples of polynomials

    We consider the problem of finding a sparse multiple of a polynomial. Given f in F[x] of degree d over a field F, and a desired sparsity t, our goal is to determine whether there exists a multiple h in F[x] of f such that h has at most t nonzero terms, and if so, to find such an h. When F = Q and t is constant, we give an algorithm that runs in time polynomial in d and in the size of the coefficients of h. When F is a finite field, we show that the problem is at least as hard as determining the multiplicative order of elements in an extension field of F (a problem thought to have complexity similar to that of integer factorization), and this lower bound is tight when t = 2. Comment: an extended abstract appears in Proc. ISAAC 2010, pp. 266-278, LNCS 6506.
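
    For intuition on the t = 2 case: a binomial x^n - c is a multiple of f over GF(p) exactly when x^n = c (mod f), so finding one amounts to an order computation in GF(p)[x]/(f), which is what the hardness result says is expensive in general. Here is a brute-force Python sketch for tiny examples; the helper polyrem and the search bound are my own, not the paper's algorithm:

        def polyrem(num, den, p):
            # Remainder of num modulo den in GF(p)[x]; coefficient lists, low to high.
            num = num[:]
            inv = pow(den[-1], -1, p)
            for i in range(len(num) - len(den), -1, -1):
                q = num[i + len(den) - 1] * inv % p
                for j, d in enumerate(den):
                    num[i + j] = (num[i + j] - q * d) % p
            while len(num) > 1 and num[-1] == 0:
                num.pop()
            return num

        def binomial_multiple(f, p, max_n=10000):
            # Search for the least n with x^n = c (mod f); then x^n - c is a
            # binomial (2-sparse) multiple of f. Returns (n, c) or None.
            r = polyrem([0, 1], f, p)            # x mod f
            for n in range(1, max_n + 1):
                if len(r) == 1 and r[0] != 0:
                    return n, r[0]
                r = polyrem([0] + r, f, p)       # multiply by x, reduce mod f
            return None

        # f = x^2 + x + 1 over GF(2): x^3 = 1 (mod f), so x^3 + 1 is a multiple.
        print(binomial_multiple([1, 1, 1], 2))   # (3, 1)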

    POPE: Partial Order Preserving Encoding

    Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE), which yields ciphertexts that preserve the relative order of the underlying plaintexts, thus allowing range and comparison queries to be performed directly on ciphertexts. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads, as are common in "big data" applications, while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(n^ε) non-persistent client storage for 0 < ε < 1, our POPE scheme provides extremely fast batch insertion consisting of a single round, and efficient search with O(1) amortized cost for up to O(n^(1-ε)) search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases. Comment: appears in the Proceedings of ACM CCS 2016.
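
    The following Python sketch captures only POPE's lazy, insert-cheap flavor, with no encryption and no buffer tree: inserts are O(1) appends to an unsorted buffer, and all comparison work is deferred until a search forces it. The real protocol keeps ciphertexts server-side and has the client order only the few elements that searches actually touch, so most pairs remain incomparable; none of that machinery appears here.

        import bisect

        class LazyOrderedStore:
            def __init__(self):
                self.buffer = []      # unsorted: no comparisons performed yet
                self.sorted = []      # elements ordered by earlier searches

            def insert(self, item):
                self.buffer.append(item)          # one round, no comparisons

            def range_search(self, lo, hi):
                # Pay the comparison cost only now, and only for buffered items.
                for item in self.buffer:
                    bisect.insort(self.sorted, item)
                self.buffer.clear()
                i = bisect.bisect_left(self.sorted, lo)
                j = bisect.bisect_right(self.sorted, hi)
                return self.sorted[i:j]

        store = LazyOrderedStore()
        for x in [42, 7, 19, 88, 3]:
            store.insert(x)                       # insert-heavy phase: cheap
        print(store.range_search(5, 40))          # [7, 19]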

    ObliviSync: Practical Oblivious File Backup and Synchronization

    Oblivious RAM (ORAM) protocols are powerful techniques that hide a client's data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which had not previously been considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high workloads, with realistic file-size distributions, and with small additional latency (compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service. Comment: 15 pages; accepted to NDSS 2017.
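
    A minimal sketch of the write-only ORAM idea that ObliviSync builds on: every logical write touches K uniformly random physical blocks, places the new data in a free slot among them, and re-encrypts all K, so the server observes write positions that are independent of what was written. In the Python below, re-encryption is modeled by refreshing a nonce, and the parameters N and K, the position map, and the failure handling are illustrative only, not the system's actual design.

        import os, random

        N, K = 16, 3                      # physical blocks; blocks touched per write
        storage = [(os.urandom(12), b"") for _ in range(N)]   # (nonce, payload)
        position_map = {}                 # client-side: logical id -> physical slot
        occupied = set()                  # client-side: slots holding live data

        def oblivious_write(logical_id, data):
            slots = random.sample(range(N), K)    # chosen independently of the data
            free = [s for s in slots if s not in occupied]
            old = position_map.get(logical_id)
            if old in slots:                      # may also overwrite in place
                free.insert(0, old)
            if not free:                          # real schemes size K and batch
                raise RuntimeError("no free slot")  # writes to make this negligible
            target = free[0]
            if old is not None and old != target:
                occupied.discard(old)             # stale copy becomes free space
            position_map[logical_id] = target
            occupied.add(target)
            for s in slots:                       # "re-encrypt" every touched block:
                payload = data if s == target else storage[s][1]
                storage[s] = (os.urandom(12), payload)  # fresh nonce each time

        oblivious_write("report.txt#0", b"hello")
        print(storage[position_map["report.txt#0"]][1])   # b'hello'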

    Detecting lacunary perfect powers and computing their roots

    We consider solutions to the equation f = h^r for polynomials f and h and an integer r > 1. Given a polynomial f in the lacunary (also called sparse or super-sparse) representation, we first show how to determine whether f can be written as h^r and, if so, to find such an r. This is a Monte Carlo randomized algorithm whose cost is polynomial in the number of nonzero terms of f and in log(deg f), i.e., polynomial in the size of the lacunary representation, and it works over GF(q)[x] (for large characteristic) as well as Q[x]. We also give two deterministic algorithms to compute the perfect root h given f and r. The first is output-sensitive (based on the sparsity of h) and works only over Q[x]. A sparsity-sensitive Newton iteration forms the basis for the second approach to computing h, which is extremely efficient and works over both GF(q)[x] (for large characteristic) and Q[x], but depends on a number-theoretic conjecture. Work of Erdős, Schinzel, Zannier, and others suggests that both of these algorithms are unconditionally polynomial-time in the lacunary size of the input polynomial f. Finally, we demonstrate the efficiency of the randomized detection algorithm and the latter perfect-root computation algorithm with an implementation in the C++ library NTL. Comment: to appear in the Journal of Symbolic Computation (JSC), 2011.
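
    The detection idea can be sketched as follows: if f = h^r over the integers, then f(a) mod p is an r-th power residue for every prime p and every point a; choosing p with r | p - 1 makes that testable by Euler's criterion (a nonzero v is an r-th power iff v^((p-1)/r) = 1). One failing test certifies that f is not an r-th power, while repeated passes give only probabilistic evidence. The prime selection and error analysis in the paper are more careful than this Python sketch, whose helpers are all hypothetical:

        import random

        def eval_sparse(f, a, p):
            # f: dict {exponent: coefficient}. Fast modular exponentiation keeps
            # evaluation polynomial in the lacunary size, even for huge degrees.
            return sum(c * pow(a, e, p) for e, c in f.items()) % p

        def prime_with_r(r, start=10**6):
            # Smallest prime p = k*r + 1 with k >= start, so that r | p - 1.
            k = start
            while True:
                p = k * r + 1
                if all(p % q for q in range(2, int(p**0.5) + 1)):
                    return p
                k += 1

        def could_be_rth_power(f, r, trials=20):
            p = prime_with_r(r)
            for _ in range(trials):
                v = eval_sparse(f, random.randrange(1, p), p)
                if v and pow(v, (p - 1) // r, p) != 1:   # Euler's criterion
                    return False        # certified: f is not an r-th power
            return True                 # probably an r-th power

        # (x^50 + 3)^2 = x^100 + 6x^50 + 9 is a perfect square:
        print(could_be_rth_power({100: 1, 50: 6, 0: 9}, 2))    # True
        # ...and perturbing the constant term breaks it:
        print(could_be_rth_power({100: 1, 50: 6, 0: 10}, 2))   # almost surely False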